
KIISE Transactions on Computing Practices


Korean Title: 다중 인코더 Transformer 기반 번역문 자동 사후 교정 모델의 디코더 주의 구조 연구
English Title: Research on the Decoder Attention Structure of Multi-encoder Transformer-based Automatic Post-Editing Model
Author(s): 신재훈 (Jaehun Shin), 이원기 (WonKee Lee), 김영길 (Youngkil Kim), 한효정 (H.Jeung Han), 이종혁 (Jong-Hyeok Lee)
Citation: Vol. 26, No. 8, pp. 367-372 (Aug. 2020)
Korean Abstract
Automatic post-editing is the process of automatically correcting the output of a machine translation system to produce a better translation; it was proposed as a line of research for improving machine translation quality externally to the MT system itself. In this paper, we review the basic structure of the multi-encoder Transformer-based correction model used for the automatic post-editing task, and then construct and apply various configurations of the decoder attention structure that handles the interdependence with the encoder outputs. In experiments using the WMT18 post-editing corpus, all models based on the multi-encoder Transformer generated sentences of better quality than the output of the MT system, and we confirmed that applying a structure within the decoder that reflects the contextual information of the source sentence into the translated sentence contributes substantially to improving post-editing performance.
English Abstract
Automatic Post-Editing (APE) is the task of correcting the output of a machine translation (MT) system in order to improve its translation quality independently of the MT system itself. In this paper, we examine the basic architecture of a multi-encoder Transformer-based APE model and implement several variants of the system's encoder-decoder attention layer, which takes the outputs of the multiple encoders as its inputs. In experiments with the WMT18 APE data, all variants of our model successfully improve the translation quality of the original MT outputs. In particular, we find that modeling the attention to incorporate source-sentence context into the translated sentence improves post-editing performance.
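
The abstracts describe decoder-side attention layers that consume the outputs of two encoders (one reading the source sentence, one reading the MT output), with several structural variants compared. As a rough illustration of what one such layer can look like, here is a minimal PyTorch sketch of a "stacked" variant; the class name, layer ordering, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch (PyTorch assumed) of one decoder layer in a multi-encoder
# Transformer for APE. It uses a "stacked" arrangement: cross-attend to the
# source-sentence encoding, then to the MT-output encoding. Names, ordering,
# and sizes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class MultiSourceDecoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.src_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)  # over source encoder output
        self.mt_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)   # over MT encoder output
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])

    def forward(self, y, src_enc, mt_enc, tgt_mask=None):
        # Masked self-attention over the partially generated post-edit.
        h, _ = self.self_attn(y, y, y, attn_mask=tgt_mask)
        y = self.norms[0](y + h)
        # Inject source-sentence context (the part the paper reports as most helpful).
        h, _ = self.src_attn(y, src_enc, src_enc)
        y = self.norms[1](y + h)
        # Then attend to the machine-translated sentence being corrected.
        h, _ = self.mt_attn(y, mt_enc, mt_enc)
        y = self.norms[2](y + h)
        # Position-wise feed-forward network.
        return self.norms[3](y + self.ff(y))

# Toy usage: batch of 2, post-edit length 5, source length 7, MT length 6.
layer = MultiSourceDecoderLayer()
out = layer(torch.randn(2, 5, 512), torch.randn(2, 7, 512), torch.randn(2, 6, 512))
print(out.shape)  # torch.Size([2, 5, 512])
```

Other plausible variants would attend to the two encodings in parallel and merge the results; the stacked form above is just one configuration of the kind the paper compares.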
Keywords: machine translation, automatic post-editing, transformer, attention mechanism, multi-encoder architecture